Universal basic income


Rethinking Sparse Autoencoders: Select-and-Project for Fairness and Control from Encoder Features Alone

Bărbălau, Antonio, Păduraru, Cristian Daniel, Poncu, Teodor, Tifrea, Alexandru, Burceanu, Elena

arXiv.org Artificial Intelligence

Sparse Autoencoders (SAEs) are widely employed for mechanistic interpretability and model steering. Within this context, steering is by design performed by means of decoding altered SAE intermediate representations. In contrast to the existing literature, we put forward an encoder-centric alternative to model steering that demonstrates stronger cross-modal performance. We introduce S&P Top-K, a retraining-free and computationally lightweight Selection and Projection framework that identifies Top-K encoder features aligned with a sensitive attribute or behavior, optionally aggregates them into a single control axis, and computes an orthogonal projection to be applied directly in the model's native embedding space. In vision-language models, it improves fairness metrics on CelebA and FairFace by up to 3.2 times over conventional SAE usage, and in large language models, it substantially reduces aggressiveness and sycophancy in Llama-3 8B Instruct, achieving up to 3.6 times gains over masked reconstruction. These findings suggest that encoder-centric interventions provide a general, efficient, and more effective mechanism for shaping model behavior at inference time than the traditional decoder-centric use of SAEs.

Figure 1: Sample generation demonstrating behavioral steering interventions on Llama 3 8B Instruct prompted to produce a sycophantic opinion. We apply two Sparse Autoencoder (SAE)-based methods to remove sycophancy: the conventional decoder-centric Masked Reconstruction approach and our proposed encoder-centric S&P Top-K protocol. Lower LLM-as-a-judge sycophancy scores indicate superior mitigation of the targeted behavioral pattern.
The results illustrate that conventional Masked Reconstruction fails to suppress sycophantic behavior, while our S&P Top-K intervention successfully redirects the model's output, eliminating direct praise, repeatedly deferring endorsement, and leading the model to ultimately employ laudatory language in a sarcastic manner that subverts the original sycophantic intent. The main steps of our approach are highlighted in green. We first employ a selection mechanism to identify relevant SAE features.
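The abstract's select-and-project recipe can be sketched in a few lines of NumPy. This is a hypothetical reconstruction from the description alone, not the authors' code: the feature-scoring rule (mean activation difference), the aggregation weights, and the shapes of `Z_pos`, `Z_neg`, and `W_enc` are all assumptions.

```python
import numpy as np

def select_and_project(Z_pos, Z_neg, W_enc, k=8):
    """Hypothetical sketch of an S&P Top-K-style intervention.

    Z_pos / Z_neg: SAE encoder activations (n_samples, n_features) for
    examples with / without the target attribute or behavior.
    W_enc: assumed mapping from SAE features to the model's native
    embedding space, shape (n_features, d_model).
    Returns a (d_model, d_model) projection matrix that removes the
    aggregated control axis from native embeddings.
    """
    # 1) Selection: score features by mean activation difference
    scores = Z_pos.mean(axis=0) - Z_neg.mean(axis=0)
    top_k = np.argsort(-np.abs(scores))[:k]

    # 2) Aggregation: combine the Top-K feature directions into one axis
    axis = (scores[top_k][:, None] * W_enc[top_k]).sum(axis=0)
    axis /= np.linalg.norm(axis)

    # 3) Projection: P is orthogonal to the control axis (P = I - aa^T)
    P = np.eye(W_enc.shape[1]) - np.outer(axis, axis)
    return P
```

At inference time, the intervention would amount to `h_steered = h @ P` on the model's embeddings, with no SAE decoding pass: that is the encoder-centric point the abstract emphasizes.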


An AI Capability Threshold for Rent-Funded Universal Basic Income in an AI-Automated Economy

Nayebi, Aran

arXiv.org Artificial Intelligence

We derive the first closed-form condition under which artificial intelligence (AI) capital profits could sustainably finance a universal basic income (UBI) without relying on new taxation or the creation of new jobs. In a Solow-Zeira task-automation economy with a CES aggregator $\sigma < 1$, we introduce an AI capability parameter that scales the productivity of automatable tasks and obtain a tractable expression for the AI capability threshold -- the minimum productivity of AI relative to pre-AI automation required for a balanced transfer. Using current U.S. economic parameters, we find that even in the conservative scenario where no new tasks or jobs emerge, AI systems would need to reach only 5-7 times today's automation productivity to fund an 11%-of-GDP UBI. Our analysis also reveals some specific policy levers: raising the public revenue share (e.g. profit taxation) of AI capital from the current 15% to about 33% halves the required AI capability threshold to 3 times existing automation productivity, but gains diminish beyond a 50% public revenue share, especially if regulatory costs increase. Market structure also strongly affects outcomes: monopolistic or concentrated oligopolistic markets reduce the threshold by increasing economic rents, whereas heightened competition significantly raises it. These results therefore offer a rigorous benchmark for assessing when advancing AI capabilities might sustainably finance social transfers in an increasingly automated economy.
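The reported policy-lever numbers are roughly consistent with a simple back-of-the-envelope scaling. The sketch below is not the paper's closed form (which depends on the full CES task-automation model); it only assumes that UBI funding is proportional to the public revenue share times AI capital profits, so the required capability multiple falls inversely with that share. The 6x baseline is the midpoint of the abstract's 5-7x range at today's ~15% share.

```python
def capability_threshold(revenue_share, base_share=0.15, base_threshold=6.0):
    """Illustrative inverse-proportional scaling, not the paper's
    closed-form condition: required AI capability multiple as a
    function of the public revenue share captured from AI capital."""
    return base_threshold * base_share / revenue_share

# Raising the public revenue share from 15% to about 33% roughly
# halves the required capability multiple, matching the ~3x figure.
status_quo = capability_threshold(0.15)  # 6.0x at today's share
reformed = capability_threshold(0.33)    # ~2.7x, i.e. about 3x
```

The abstract's caveat that gains diminish beyond a 50% share (and under rising regulatory costs) is exactly where this linear sketch breaks down and the full model is needed.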


AI-driven Automation as a Pre-condition for Eudaimonia

Siapka, Anastasia

arXiv.org Artificial Intelligence

The automation of work, understood as the process by which human labour is replaced by machines, is also a cause for scholarly concern across different disciplines. For some scholars, the large-scale deployment of AI in the workplace amounts to a 'Fourth Industrial Revolution' or a 'Second Machine Age', threatening to render human work--nay, humankind in its entirety--obsolete [3],[6]. Even with the potential introduction of a Universal Basic Income (UBI), which could in principle guarantee citizens' livelihood, it is argued that policymakers would still need to safeguard work, since it bears intrinsic value that transcends the instrumental value of a paycheck [8]. AI-driven automation is, hence, largely framed as a threat to be counteracted by law. Nonetheless, the axiological superiority of work as an intrinsically valuable activity and the insistence on its preservation, even if humans' sustenance could be otherwise secured, should not be taken for granted.


Money for nothing: is universal basic income about to transform society?

The Guardian

When Elinor O'Donovan found out she had been randomly selected to participate in a basic income pilot scheme, she couldn't believe her luck. In return for a guaranteed salary of just over €1,400 (£1,200) a month from the Irish government, all the 27-year-old artist had to do was fill out a biannual questionnaire about her wellbeing and how she spends her time. "It was like winning the lottery. I was in such disbelief," she says. The income, which she will receive until September 2025, has enabled her to give up temping and focus instead on her art.


AI is coming for our jobs! Could universal basic income be the solution?

The Guardian

The idea of a guaranteed income for all has been floating around for centuries, its popularity ebbing and flowing with the passing tide of current events. While it is still considered by many to be a radical concept, proponents of a universal basic income (UBI) no longer see it only as a solution to poverty but as the answer to some of the biggest threats faced by modern workers: wage inequality, job insecurity – and the looming possibility of AI-induced job losses. Elon Musk, at the recent Bletchley Park summit, said he believed "no job is needed" due to the development of AI, and that a job can be for "personal satisfaction". Economist and political theorist Karl Widerquist, professor of philosophy at Georgetown University-Qatar, sees it differently. "Even if AI takes your job away, you don't necessarily just become unemployed for the rest of your life," he says.


What to Know About Worldcoin and the Controversy Around It

TIME - Tech

Over the past few months, shiny metallic orbs have materialized in cities around the world, from New York to Berlin to Tokyo. The project's detractors slam them as invasive, dystopian and exploitative. Welcome to the rollout of Worldcoin, an AI-meets-crypto project from OpenAI founder Sam Altman that has stirred endless controversy. The startup uses orbs to scan people's eyes in exchange for a digital ID and possibly some cryptocurrency, depending on what country they live in. Altman and his co-founder Alex Blania hope that Worldcoin will provide a new solution to online identity in a digital landscape rife with scams, bots and even AI imposters.


'What should the limits be?' The father of ChatGPT on whether AI will save humanity – or destroy it

The Guardian

When I meet Sam Altman, the chief executive of AI research laboratory OpenAI, he is in the middle of a world tour. He is preaching that the very AI systems he and his competitors are building could pose an existential risk to the future of humanity – unless governments work together now to establish guide rails, ensuring responsible development over the coming decade. In the subsequent days, he and hundreds of tech leaders, including scientists and "godfathers of AI", Geoffrey Hinton and Yoshua Bengio, as well as Google's DeepMind CEO, Demis Hassabis, put out a statement saying that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". It is an all-out effort to convince world leaders that they are serious when they say that "AI risk" needs concerted international effort. It must be an interesting position to be in – Altman, 38, is the daddy of AI chatbot ChatGPT, after all, and is leading the charge to create "artificial general intelligence", or AGI, an AI system capable of tackling any task a human can achieve.


Super AGI and the Matrix: Sophia the Robot co-creator predicts economic 'mayhem' on road to AI utopia

FOX News

Ben Goertzel said the sky's "not even the limit" when it comes to the potential impact of artificial general intelligence. The co-creator of the social humanoid robot Sophia says artificial general intelligence (AGI) and super AGI are mere decades away, and he warns that the subsequent disruption from these artificial intelligence (AI) models will cause a significant amount of political and economic "mayhem" before massive benefits to humanity are seen. Speaking with Fox News Digital on the global aspects of the transition from the present day to AGI, Dr. Ben Goertzel highlighted the need to develop a beneficial, compassionate super general intelligence model to ensure humanity flourishes. At the point often referred to as the "singularity" -- when AGI exceeds human intelligence and reasoning -- humankind will be at the whim of the AI model's motivations and behaviors. AI researchers and futurologists have repeatedly said that this inflection point is still decades away. Given the current timeline of AI acceleration, Goertzel concurred with friend and computer scientist Ray Kurzweil, calling it a "fair approximation" that human-level AGI will be created around 2029.


Will A.I. Become the New McKinsey?

The New Yorker

When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it's become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers.


AI Creeps Closer to Automation, But Could This Displace Workers? - Top Crypto News

#artificialintelligence

New AI initiatives are using artificial intelligence to streamline the development process by automating repetitive tasks. While the goal is to optimize efficiency, concerns have been raised about the potential impact on employment rates and the economy. This article explores the cost of AI efficiency and the potential need for Universal Basic Income (UBI) as a solution. UXOS AI and other similar projects are working to streamline the development process and reduce development time and cost. It operates on the Binance Smart Chain network and creates a customized set of tools to automate the development process. While this approach can significantly improve efficiency and speed up project completion, there are concerns about the potential impact on employment rates and the economy in general.